Facebook claims its AI can predict four days in advance if a coronavirus patient's condition will deteriorate
Facebook claims to have designed software capable of predicting whether a coronavirus patient's health will deteriorate, or whether they will need oxygen, just by scanning their chest X-rays. Working with New York University (NYU), the social media firm says the system can forecast such developments up to four days in advance. Together they have built three machine-learning models to help doctors better prepare as cases around the world continue to rise. One model is designed to predict deterioration from a single chest X-ray, another does the same using a series of X-rays, and the third uses an X-ray to determine whether, and how much, supplemental oxygen a patient may need.
- North America > United States > New York (0.26)
- Asia > China > Hubei Province > Wuhan (0.06)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (0.97)
- Health & Medicine > Therapeutic Area > Immunology (0.97)
- Health & Medicine > Diagnostic Medicine > Imaging (0.75)
Facebook's redoubled AI efforts won't stop the spread of harmful content
Facebook says it's using AI to prioritize potentially problematic posts for human moderators to review as it works to more quickly remove content that violates its community guidelines. The social media giant previously leveraged machine learning models to proactively take down low-priority content and left high-priority content reported by users to human reviewers. But Facebook claims it now combines content identified by users and models into a single collection before filtering, ranking, and deduplicating it and handing it off to thousands of moderators, many of whom are contract employees. Facebook's continued investment in moderation comes as reports suggest the company is failing to stem the spread of misinformation, disinformation, and hate speech on its platform. Reuters recently found over three dozen pages and groups that featured discriminatory language about Rohingya refugees and undocumented migrants.
- Oceania > Papua New Guinea (0.05)
- North America > United States > Wisconsin > Kenosha County > Kenosha (0.05)
- North America > United States > New York (0.05)
- (3 more...)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
- Information Technology > Services (1.00)
- (2 more...)
Facebook claims its new chatbot beats Google's as the best in the world
Blender's ability comes from the immense scale of its training data. It was first trained on 1.5 billion publicly available Reddit conversations, to give it a foundation for generating responses in a dialogue. It was then fine-tuned with additional data sets for each of three skills: conversations that contained some kind of emotion, to teach it empathy (if a user says "I got a promotion," for example, it can say, "Congratulations!"); information-dense conversations with an expert, to teach it knowledge; and conversations between people with distinct personas, to teach it personality. The resulting model is 3.6 times larger than Google's chatbot Meena, announced in January, and is so big that it can't fit on a single device and must run across two computing chips instead. At the time, Google proclaimed that Meena was the best chatbot in the world.
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
Facebook Claims The Epoch Times Used Artificial Intelligence to Push Pro-Trump Conspiracies - Black Enterprise
Facebook took down more than 600 accounts tied to The Epoch Times for using false identities created by artificial intelligence to push conspiracy theories on a variety of topics, including impeachment and the upcoming elections. "What's new here is that this is purportedly a U.S.-based media company leveraging foreign actors posing as Americans to push political content. We've seen it a lot with state actors in the past," Facebook's head of security policy, Nathaniel Gleicher, said in an interview.
Facebook under fire for its facial recognition AI amid claims it scans EVERY photo
Facebook has come under fire for its controversial facial recognition technology. The social media giant primarily uses it to assist in tagging users in photos, but consumer groups and advocates say it may violate users' privacy, according to the New York Times. The scrutiny comes as Facebook continues to grapple with the fallout from the Cambridge Analytica scandal. Privacy advocates have specifically taken issue with how Facebook markets its facial recognition technology, telling users that it can "help protect you from a stranger using your photo to impersonate you." Proponents of the technology say it can even be an effective tool for spotting criminals.
- Asia > China (0.07)
- North America > United States > Illinois (0.05)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Vision > Face Recognition (1.00)